
linear autoencoder



A solvable high-dimensional model where nonlinear autoencoders learn structure invisible to PCA while test loss misaligns with generalization

Mendes, Vicente Conde, Bardone, Lorenzo, Koller, Cédric, Moreira, Jorge Medina, Erba, Vittorio, Troiani, Emanuele, Zdeborová, Lenka

arXiv.org Machine Learning

Many real-world datasets contain hidden structure that cannot be detected by simple linear correlations between input features. For example, latent factors may influence the data in a coordinated way, even though their effect is invisible to covariance-based methods such as PCA. In practice, nonlinear neural networks often succeed in extracting such hidden structure in unsupervised and self-supervised learning. However, constructing a minimal high-dimensional model where this advantage can be rigorously analyzed has remained an open theoretical challenge. We introduce a tractable high-dimensional spiked model with two latent factors: one visible to covariance, and one statistically dependent yet uncorrelated, appearing only in higher-order moments. PCA and linear autoencoders fail to recover the latter, while a minimal nonlinear autoencoder provably extracts both. We analyze both the population risk and empirical risk minimization. Our model also provides a tractable example where self-supervised test loss is poorly aligned with representation quality: nonlinear autoencoders recover latent structure that linear methods miss, even though their reconstruction loss is higher.
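The key ingredient of the abstract above — a latent factor that is uncorrelated with the observed signal yet statistically dependent on it — is easy to illustrate numerically. A minimal sketch (a generic toy construction, not the paper's spiked model): take s Gaussian and y = s * g with g an independent Gaussian, so that the linear correlation of s and y vanishes while their squared values are clearly correlated, i.e. the dependence lives only in fourth-order moments.

```python
import random

random.seed(0)
n = 100_000

# Latent factor s and independent Gaussian g; observed signal y = s * g.
# s and y are uncorrelated (E[s y] = E[s^2] E[g] = 0) yet dependent:
# the dependence shows up only in fourth-order moments.
s = [random.gauss(0, 1) for _ in range(n)]
y = [si * random.gauss(0, 1) for si in s]

def corr(a, b):
    """Sample Pearson correlation of two equal-length lists."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b)) / m
    va = sum((x - ma) ** 2 for x in a) / m
    vb = sum((z - mb) ** 2 for z in b) / m
    return cov / (va * vb) ** 0.5

lin = corr(s, y)                                      # near zero
quad = corr([x * x for x in s], [z * z for z in y])   # clearly positive

print(f"corr(s, y)     = {lin:+.3f}")
print(f"corr(s^2, y^2) = {quad:+.3f}")
```

Any covariance-based method (PCA, a linear autoencoder) sees only the first statistic and so cannot recover s from y; a nonlinearity gives access to the second.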




Regularized linear autoencoders recover the principal components, eventually

Neural Information Processing Systems

While there has been rapid progress in understanding the learning dynamics of neural networks, most such work focuses on the networks' ability to fit input-output relationships. However, many machine learning problems require learning representations with general utility.


Pre-training of Recurrent Neural Networks via Linear Autoencoders

Luca Pasa, Alessandro Sperduti

Neural Information Processing Systems

We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e., linear dynamical systems modelling the target sequences. We start by giving a closed-form solution for the optimal weights of a linear autoencoder given a training set of sequences. This solution, however, is computationally very demanding, so we suggest a procedure for obtaining an approximate solution for a given number of hidden units. The weights obtained for the linear autoencoder are then used as initial weights for the input-to-hidden connections of a recurrent neural network, which is then trained on the desired task. Using four well-known datasets of sequences of polyphonic music, we show that the proposed pre-training approach is highly effective, since it allows us to largely improve on the state-of-the-art results on all the considered datasets.
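The closed-form optimum referenced above rests on a classical fact: the loss-minimising weights of a linear autoencoder span the top principal directions of the data covariance. A minimal sketch of that connection (a generic toy example, not the paper's sequence construction) recovers the leading direction of a 2-D dataset by power iteration on the sample covariance:

```python
import random

random.seed(1)

# Toy zero-mean data stretched along the direction (1, 1) / sqrt(2).
data = []
for _ in range(5000):
    t = random.gauss(0, 3)      # large variance along (1, 1)
    e = random.gauss(0, 0.5)    # small variance along (1, -1)
    data.append(((t + e) / 2 ** 0.5, (t - e) / 2 ** 0.5))

# Sample covariance matrix entries (data is zero-mean by construction).
n = len(data)
cxx = sum(x * x for x, _ in data) / n
cyy = sum(y * y for _, y in data) / n
cxy = sum(x * y for x, y in data) / n

# Power iteration: repeatedly apply the covariance and renormalise.
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)

print(v)  # close to (1, 1) / sqrt(2), the top principal direction
```

A rank-1 linear autoencoder trained on this data would converge (up to sign and scale) to the same direction, which is why its optimum admits a closed form in terms of the covariance spectrum.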


Bilateral Distribution Compression: Reducing Both Data Size and Dimensionality

Broadbent, Dominic, Whiteley, Nick, Allison, Robert, Lovett, Tom

arXiv.org Machine Learning

Existing distribution compression methods reduce dataset size by minimising the Maximum Mean Discrepancy (MMD) between original and compressed sets, but modern datasets are often large in both sample size and dimensionality. We propose Bilateral Distribution Compression (BDC), a two-stage framework that compresses along both axes while preserving the underlying distribution, with overall linear time and memory complexity in dataset size and dimension. Central to BDC is the Decoded MMD (DMMD), which quantifies the discrepancy between the original data and a compressed set decoded from a low-dimensional latent space. BDC proceeds by (i) learning a low-dimensional projection using the Reconstruction MMD (RMMD), and (ii) optimising a latent compressed set with the Encoded MMD (EMMD). We show that this procedure minimises the DMMD, guaranteeing that the compressed set faithfully represents the original distribution. Experiments show that across a variety of scenarios BDC can achieve comparable or superior performance to ambient-space compression at substantially lower cost.
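The MMD that BDC builds on can be estimated from kernel evaluations alone. A minimal sketch, assuming a Gaussian (RBF) kernel on 1-D toy data and the simple biased V-statistic estimator (not the paper's BDC pipeline or its DMMD/RMMD/EMMD variants):

```python
import math
import random

random.seed(2)

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def mmd2(xs, ys, gamma=0.5):
    """Biased (V-statistic) estimate of the squared MMD between two samples."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

x = [random.gauss(0, 1) for _ in range(200)]
y_same = [random.gauss(0, 1) for _ in range(200)]
y_shift = [random.gauss(3, 1) for _ in range(200)]

m_same = mmd2(x, y_same)    # near zero: same distribution
m_shift = mmd2(x, y_shift)  # clearly positive: distributions differ

print(f"MMD^2 (same dist.)    = {m_same:.4f}")
print(f"MMD^2 (shifted dist.) = {m_shift:.4f}")
```

Distribution compression methods minimise such a statistic over the compressed set; the quadratic cost of this naive estimator is exactly what motivates the linear-time complexity claimed for BDC.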



Optimal Linear Baseline Models for Scientific Machine Learning

DeLise, Alexander, Loh, Kyle, Patel, Krish, Teague, Meredith, Arnold, Andrea, Chung, Matthias

arXiv.org Artificial Intelligence

In nearly every scientific discipline, a central challenge lies in modeling, computing, and understanding the functional relationships between signals, measurements, and their underlying physical processes. These mappings typically manifest in three fundamental forms: forward modeling, inference, and autoencoding. While mathematical models often provide insight into these relationships, they are frequently inadequate for real-world prediction and analysis due to limitations in analytical tractability, computational feasibility, or algorithmic robustness. The advent of scientific machine learning (ML) has led to a paradigm shift, where data-driven methods, particularly neural networks, have emerged as powerful tools for learning complex input-output relations directly from data. Unlike traditional model-based approaches, neural networks are capable of overcoming longstanding issues such as computational complexity, scalability, model misspecification, and the ill-posedness inherent to many scientific problems [1]. A central strength of neural networks is their capacity to project inputs into a lower-dimensional latent space before mapping to targets, a principle commonly realized in autoencoder and encoder-decoder architectures.
